
    Model-Based Feature Selection Based on Radial Basis Functions and Information Measures

    In this paper the development of a new embedded feature selection method is presented, based on a Radial-Basis-Function Neural-Fuzzy modelling structure. The proposed method is designed to find the relative importance of features in a given dataset (or process in general), with a special focus on manufacturing processes. The proposed approach evaluates the importance of process features by using information-theoretic measures to quantify the correlation between the process features and the modelling performance. Crucially, the proposed method acts during the training of the process model; hence it is an embedded method, achieving the modelling/classification task in parallel with the feature selection task. The latter is achieved by taking advantage of the information in the output layer of the Neural-Fuzzy structure; in the presented case this is a TSK-type polynomial function. Two information measures are evaluated in this work, both based on information entropy: mutual information and cross-sample entropy. The proposed methodology is tested against two popular datasets in the literature (IRIS, plant data; AirFoil, manufacturing/design data) and one further case study relevant to manufacturing, the heat treatment of steel. Results show the good and reliable performance of the developed modelling structure, on par with existing published work, as well as the good performance of the feature selection task in terms of correctly identifying important process features.
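
    As a rough illustration of the embedded idea (not the paper's code), the sketch below fits a stand-in RBF model and ranks features by the mutual information between each input and the model's output; the Iris data, the SVC stand-in model and scikit-learn's mutual-information estimator are assumptions made for brevity.

```python
# Minimal sketch (not the paper's method): rank features by the mutual
# information between each input feature and the output of a fitted RBF model.
# Dataset, model choice and MI estimator are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.feature_selection import mutual_info_classif

X, y = load_iris(return_X_y=True)

# Stand-in "RBF model": an SVM with a radial basis function kernel.
model = SVC(kernel="rbf", gamma="scale").fit(X, y)
y_hat = model.predict(X)

# Embedded-style relevance proxy: MI between each feature and the model output.
relevance = mutual_info_classif(X, y_hat, random_state=0)
for i, score in enumerate(relevance):
    print(f"feature {i}: MI with model output = {score:.3f}")
```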

    Transient thermography for flaw detection in friction stir welding: a machine learning approach

    A systematic computational method to simulate and detect sub-surface flaws, through non-destructive transient thermography, in aluminium sheets and friction stir welded sheets is proposed. The proposed method relies on feature extraction methods and a data-driven machine learning modelling structure. In this work, we propose the use of a multi-layer perceptron feed-forward neural network with feature extraction methods to improve the flaw-probing depth of transient thermography inspection. Furthermore, for the first time, we propose Thermographic Signal Linear Modelling (TSLM), a hyper-parameter-free feature extraction technique for transient thermography. The new feature extraction and modelling framework was tested with out-of-sample experimental transient thermography data, and results show effectiveness in sub-surface flaw detection of up to 2.3 mm deep in aluminium sheets (99.8% true positive rate, 92.1% true negative rate) and up to 2.2 mm deep in friction stir welds (97.2% true positive rate, 87.8% true negative rate).
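
    The abstract does not define TSLM beyond it being linear and hyper-parameter-free; a minimal sketch, assuming it amounts to a per-pixel straight-line fit of log-temperature against log-time whose slope and intercept become the classifier features, is given below. Array shapes and the synthetic decay curve are illustrative, not the paper's experimental setup.

```python
# Minimal sketch of a per-pixel linear fit in log-log space (an assumption about
# what TSLM computes); the slope/intercept pair per pixel is the feature vector.
import numpy as np

def tslm_features(frames, times):
    """frames: (T, H, W) post-flash thermal decay sequence; times: (T,) seconds."""
    T, H, W = frames.shape
    log_t = np.log(times)
    log_T = np.log(frames.reshape(T, -1))                 # (T, H*W)
    # Per-pixel least-squares straight-line fit: log_T ~ slope*log_t + intercept.
    A = np.column_stack([log_t, np.ones_like(log_t)])     # (T, 2)
    coeffs, *_ = np.linalg.lstsq(A, log_T, rcond=None)    # (2, H*W)
    slope, intercept = coeffs
    return np.stack([slope.reshape(H, W), intercept.reshape(H, W)], axis=-1)

# Toy usage: an ideal semi-infinite cooling curve T ~ t**(-1/2) on an 8x8 patch.
times = np.linspace(0.1, 5.0, 50)
frames = np.ones((50, 8, 8)) / np.sqrt(times)[:, None, None]
feats = tslm_features(frames, times)          # (8, 8, 2) slope/intercept maps
print(feats[..., 0].mean())                   # close to -0.5, the ideal decay slope
```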

    Interpretable machine learning: Convolutional neural networks with RBF fuzzy logic classification rules

    A convolutional neural network (CNN) learning structure is proposed, with added interpretability-oriented layers, in the form of Fuzzy Logic-based rules. This is achieved by creating a classification layer based on a Neural-Fuzzy classifier and integrating it into the overall learning mechanism within the deep learning structure. Using this new structure, one can extract linguistic Fuzzy Logic-based rules directly from the deep learning structure, which enhances the interpretability of the overall system. The classification layer is realised via a Radial Basis Function (RBF) Neural-Network, which is a direct equivalent of a class of Fuzzy Logic-based systems. In this work, the development of the RBF neural-fuzzy system and its integration into the deep-learning CNN is presented. The proposed hybrid CNN RBF-NF structure can form a fundamental building block towards more complex deep-learning structures with Fuzzy Logic-based interpretability. Using simulation results on a benchmark data-driven modelling and classification problem (labelled handwritten digits, MNIST, 70,000 samples), we show that the proposed learning structure maintains a good level of forecasting/prediction accuracy (> 96% on unseen data) compared to state-of-the-art CNN deep learning structures, while providing linguistic interpretability to the classification layer.
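
    One plausible reading of the hybrid structure is sketched below in PyTorch: a small CNN feature extractor followed by an RBF classification layer with learnable prototypes, each output acting as the firing strength of a fuzzy rule. Layer sizes, prototype count and initialisation are assumptions, not the authors' configuration.

```python
# Minimal PyTorch sketch (an interpretation, not the authors' code): a small CNN
# whose classification layer is an RBF layer with learnable prototypes, so each
# output can be read as the firing strength of a fuzzy rule of the form
# "IF the extracted features are close to prototype k THEN class k".
import torch
import torch.nn as nn

class RBFLayer(nn.Module):
    def __init__(self, in_features, n_rules):
        super().__init__()
        self.centres = nn.Parameter(torch.randn(n_rules, in_features))
        self.log_sigma = nn.Parameter(torch.zeros(n_rules))

    def forward(self, x):
        # Gaussian membership of each sample to each rule centre.
        d2 = torch.cdist(x, self.centres).pow(2)             # (batch, n_rules)
        return torch.exp(-d2 / (2 * self.log_sigma.exp().pow(2)))

class CNNRBF(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.rbf = RBFLayer(32 * 7 * 7, n_classes)   # here: one rule per class

    def forward(self, x):                 # x: (batch, 1, 28, 28) MNIST-sized input
        # Outputs are rule firing strengths, not softmax probabilities.
        return self.rbf(self.features(x))

model = CNNRBF()
print(model(torch.randn(4, 1, 28, 28)).shape)        # torch.Size([4, 10])
```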

    Iterative Information Granulation for Novelty Detection in Complex Datasets

    Recognition memory in a number of mammals is usually utilised to identify novel objects that violate model predictions. In humans in particular, the recognition of novel objects is foremost associated with the ability to group objects that are highly compatible/similar. Granular computing not only mimics the human cognitive tendency to draw objects together but also mimics the ability to capture associated properties by similarity, proximity or functionality. In this paper, an iterative information granulation approach is presented for the problem of novelty detection in complex data. Two granular compatibility measures are used, based on principles of Granular Computing, namely the multidimensional distance between the granules, as well as the granular density and volume. A two-stage iterative information granulation process is proposed in this work. In the first stage, a predefined number of granular detectors are constructed. The granular detectors capture the relationships (rules) between the input-output data and then use this information in a second granulation stage in order to discriminate new samples as novel. The proposed iterative information granulation approach for novelty detection is then applied to three different benchmark problems in pattern recognition, demonstrating very good performance.
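
    A minimal sketch of the general granulation-for-novelty-detection idea follows (not the paper's algorithm): hyper-box granules are merged greedily using a compatibility score that mixes centre distance with merged box size, and a sample is flagged as novel when no granule covers it. The score, margin and toy data are assumptions.

```python
# Minimal sketch of the general idea (not the paper's algorithm): hyper-box
# granules are merged greedily using a compatibility score combining the
# distance between granule centres with the size of the merged box; a sample
# is declared novel if no (slightly expanded) granule covers it.
import numpy as np

def granulate(X, n_granules):
    boxes = [(x.copy(), x.copy()) for x in X]          # one (lower, upper) box per point
    while len(boxes) > n_granules:
        best, best_score = None, np.inf
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                lo = np.minimum(boxes[i][0], boxes[j][0])
                hi = np.maximum(boxes[i][1], boxes[j][1])
                c_i = (boxes[i][0] + boxes[i][1]) / 2
                c_j = (boxes[j][0] + boxes[j][1]) / 2
                score = np.linalg.norm(c_i - c_j) + np.sum(hi - lo)
                if score < best_score:
                    best, best_score = (i, j, lo, hi), score
        i, j, lo, hi = best
        boxes[i] = (lo, hi)                            # merge the most compatible pair
        del boxes[j]
    return boxes

def is_novel(x, boxes, margin=0.1):
    return not any(np.all(x >= lo - margin) and np.all(x <= hi + margin)
                   for lo, hi in boxes)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.3, size=(60, 2))                 # "normal" training samples
detectors = granulate(X, n_granules=5)
print(is_novel(np.array([0.1, 0.0]), detectors))       # expected False: inside the cluster
print(is_novel(np.array([3.0, 3.0]), detectors))       # expected True: far from all granules
```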

    An ensemble data-driven fuzzy network for laser welding quality prediction

    This paper presents an Ensemble Data-Driven Fuzzy Network (EDDFN) for laser welding quality prediction that is composed of a number of strategically selected Data-Driven Fuzzy Models (DDFMs). Each model is trained by an Adaptive Negative Correlation Learning (ANCL) approach. A monitoring system provides quality-relevant information on the laser beam spectrum and the geometry of the melt pool. This information is used by the proposed ensemble model to assist in the prediction of the welding quality. Each DDFM is based on three conceptual components, i.e. a selection procedure for the most representative welding information, a granular comprehension process of the data, and the construction of a fuzzy reasoning mechanism as a series of Radial Basis Function Neural Networks (RBF-NNs). The proposed model aims at providing a fuzzy reasoning engine that is able to preserve a good balance between transparency and accuracy while improving its prediction properties. We apply the EDDFN to a real case study in the manufacturing industry for the prediction of welding quality. The corresponding results confirm that the EDDFN provides better prediction properties compared to a single DDFM, with an overall prediction performance > 78%.
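
    The sketch below illustrates the standard negative correlation learning penalty that adaptive NCL builds on; the paper's RBF-based fuzzy models are replaced here by arbitrary member predictions, and the toy data and lambda value are assumptions.

```python
# Minimal sketch of the standard negative correlation learning (NCL) loss:
# each member is penalised for its squared error plus a term encouraging its
# error to be negatively correlated with the rest of the ensemble.
import numpy as np

def ncl_losses(member_preds, y, lam=0.5):
    """member_preds: (M, N) predictions of M ensemble members on N samples."""
    M = len(member_preds)
    f_bar = member_preds.mean(axis=0)                  # simple-average ensemble output
    losses = []
    for f_i in member_preds:
        # sum over j != i of (f_j - f_bar)
        others = member_preds.sum(axis=0) - f_i - (M - 1) * f_bar
        penalty = (f_i - f_bar) * others
        losses.append(np.mean((f_i - y) ** 2 + lam * penalty))
    return np.array(losses)

# Toy usage: three members predicting a noisy sine.
rng = np.random.default_rng(0)
x = np.linspace(0, np.pi, 100)
y = np.sin(x)
preds = np.stack([y + 0.1 * rng.standard_normal(100) for _ in range(3)])
print(ncl_losses(preds, y, lam=0.5))
```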

    Absolute electrical impedance tomography (aEIT) guided ventilation therapy in critical care patients: simulations and future trends

    Thoracic electrical impedance tomography (EIT) is a noninvasive, radiation-free monitoring technique whose aim is to reconstruct a cross-sectional image of the internal spatial distribution of conductivity from electrical measurements made by injecting small alternating currents via an electrode array placed on the surface of the thorax. The purpose of this paper is to discuss the fundamentals of EIT and to demonstrate the principles of mechanical ventilation, lung recruitment, and EIT imaging on a comprehensive physiological model. The model combines a model of respiratory mechanics, a model of human lung absolute resistivity as a function of air content, and a 2-D finite-element mesh of the thorax to simulate EIT image reconstruction during mechanical ventilation. The overall model gives a good understanding of respiratory physiology and of EIT monitoring techniques in mechanically ventilated patients. The model proposed here was able to reproduce consistent images of ventilation distribution in simulated acutely injured and collapsed lung conditions. Finally, a new advisory system architecture is proposed for monitoring and ventilator therapy management of critical care patients, integrating a previously developed data-driven physiological model for continuous and noninvasive prediction of blood gas parameters with the regional lung function data/information generated from absolute EIT (aEIT).
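
    For orientation only, the snippet below sketches the one-step linearised, Tikhonov-regularised reconstruction that underlies EIT imaging; in practice the sensitivity matrix comes from the finite-element thorax model, whereas here it and the measurements are random stand-ins.

```python
# Minimal sketch of one-step linearised EIT reconstruction (illustrative only:
# a real system derives the sensitivity matrix J from a finite-element model of
# the thorax; here J and the voltage changes are random stand-ins).
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_elem = 208, 576            # e.g. 16-electrode adjacent protocol, coarse 2-D mesh
J = rng.normal(size=(n_meas, n_elem))          # sensitivity (Jacobian) matrix
dv = rng.normal(size=n_meas)                   # change in boundary voltages

# Tikhonov-regularised solution of the linearised problem:
# d_sigma = (J^T J + lam^2 I)^-1 J^T dv
lam = 0.1
d_sigma = np.linalg.solve(J.T @ J + lam**2 * np.eye(n_elem), J.T @ dv)
print(d_sigma.shape)                           # (576,) conductivity change per element
```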

    Multi-criteria decision making using Fuzzy Logic and ATOVIC with application to manufacturing

    In this paper, multi-criteria decision making (MCDM) is investigated as a framework for the classification of part quality in a manufacturing process. The importance of linguistic interpretability of decisions is highlighted, and a new framework relying on the integration of Fuzzy Logic with an existing MCDM method is proposed. ATOVIC, previously developed as a TOPSIS-VIKOR-based MCDM framework, is enhanced with a Fuzzy Logic framework for decision making, yielding Fuzzy-ATOVIC. This research work demonstrates how to add linguistic interpretability to decisions made by the MCDM framework. This contributes to explainable decisions, which can be crucial in numerous domains, for example in safety-critical manufacturing processes. The case study presented is that of the ultrasonic inspection of plastic pipes, where thermomechanical joining is a critical part of the manufacturing process. The proposed framework is used to classify (take decisions on) the quality of manufactured parts using ultrasonic images around the joint region of the pipes. For comparison, both the original and the Fuzzy Logic-enhanced MCDM methods are contrasted using data from manufacturing trials and subsequent ultrasonic testing. It is shown that Fuzzy-ATOVIC provides a framework for linguistic interpretability while its performance is the same as or better than that of the original MCDM framework.
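
    As a point of reference, the sketch below implements the TOPSIS-style ranking core that ATOVIC builds on (the VIKOR component and the Fuzzy Logic extension are omitted); the weights, criteria and decision matrix are illustrative assumptions.

```python
# Minimal sketch of a TOPSIS-style ranking step (the fuzzy extension and the
# VIKOR component of ATOVIC are omitted; data and weights are illustrative).
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """decision_matrix: (alternatives, criteria); benefit: True where larger is better."""
    M = decision_matrix / np.linalg.norm(decision_matrix, axis=0)   # vector normalisation
    V = M * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)        # closeness: higher = better ranked part

# Toy example: three parts scored on two quality criteria and one cost criterion.
scores = np.array([[0.9, 0.8, 3.0],
                   [0.7, 0.9, 2.0],
                   [0.5, 0.6, 1.0]])
closeness = topsis(scores, weights=np.array([0.4, 0.4, 0.2]),
                   benefit=np.array([True, True, False]))
print(closeness)
```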

    Long-term learning for type-2 neural-fuzzy systems

    The development of a new long-term learning framework for interval-valued neural-fuzzy systems is presented for the first time in this article. The need for such a framework is twofold: to address continuous batch learning of data sets, and to take advantage of the extra degree of freedom that type-2 Fuzzy Logic systems offer for better model predictive ability. The presented long-term learning framework uses principles of granular computing (GrC) to capture information/knowledge from raw data in the form of interval-valued sets in order to build a computational mechanism that has the ability to adapt to new information in an additive and long-term learning fashion. The latter is to accommodate new input–output mappings and new classes of data without significantly disturbing existing input–output mappings, therefore maintaining existing performance while creating and integrating new knowledge (rules). This is achieved via an iterative algorithmic process, which involves a two-step operation: iterative rule-base growth (capturing new knowledge) and iterative rule-base pruning (removing redundant knowledge) for type-2 rules. The two-step operation helps create a growing, but sustainable, model structure. The performance of the proposed system is demonstrated using a number of well-known non-linear benchmark functions as well as a highly nonlinear multivariate real industrial case study. Simulation results show that the performance of the original model structure is maintained and is comparable to the updated model's performance following the incremental learning routine. The study is concluded by evaluating the performance of the proposed framework over frequent and consecutive model updates, where the balance between model accuracy and complexity is further assessed.
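
    A minimal sketch of the grow/prune loop described above follows, with interval type-2 details omitted: each rule is reduced to a Gaussian centre, width and constant consequent, and the growth/pruning thresholds are illustrative assumptions.

```python
# Minimal sketch of incremental rule-base growth and pruning (type-2/interval
# aspects omitted; thresholds, widths and the toy data stream are assumptions).
import numpy as np

def fire(rules, x):
    return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * s ** 2)) for c, s, _ in rules])

def predict(rules, x):
    w = fire(rules, x)
    return np.dot(w, [q for _, _, q in rules]) / (w.sum() + 1e-12)

def update_rule_base(rules, X_new, y_new, grow_tol=0.2, prune_tol=1e-3):
    # Grow: add a rule centred on the worst-predicted new sample if its error is large.
    errors = np.array([abs(predict(rules, x) - y) for x, y in zip(X_new, y_new)])
    if errors.max() > grow_tol:
        k = errors.argmax()
        rules.append((X_new[k].copy(), 0.3, y_new[k]))    # (centre, width, consequent)
    # Prune: drop rules that barely fire on the recent data (redundant knowledge).
    return [r for r in rules
            if max(fire([r], x)[0] for x in X_new) > prune_tol]

rng = np.random.default_rng(0)
rules = [(np.array([0.0]), 0.3, 0.0)]                     # initial one-rule model
for _ in range(5):                                        # consecutive data batches
    X = rng.uniform(-2, 2, size=(20, 1))
    y = np.sin(X[:, 0])
    rules = update_rule_base(rules, X, y)
print(f"{len(rules)} rules after incremental updates")
```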

    In-situ porosity prediction in metal powder bed fusion additive manufacturing using spectral emissions: a prior-guided machine learning approach

    Numerous efforts in the additive manufacturing literature have been made toward in-situ defect prediction for process control and optimization. However, the current work in the literature is limited by the need for multi-sensory data of appropriate resolution and scale to capture defects reliably, and by the need for systematic experimental and data-driven modeling validation to prove utility. For the first time in the literature, we propose a data-driven neural network framework capable of in-situ micro-porosity localization for laser powder bed fusion using exclusively within-hatch-stripe sensory data, as opposed to a three-dimensional neighborhood of sensory data. We further propose using prior-guided neural networks to utilize the often-abundant nominal data in the form of a prior loss, enabling the machine learning structure to comply more closely with process physics. The proposed methods are validated via rigorous experimental data sets of high-strength aluminum A205 parts, repeated k-fold cross-validation, and prior-guided validation. Using exclusively within-hatch-stripe data, we detect and localize porosity with a spherical equivalent diameter (SED) smaller than 50.00 μm with a classification accuracy of 73.13 ± 1.57%. This is the first work in the literature demonstrating in-situ localization of porosities as small as 38.12 μm SED, and is more than a five-fold improvement on the smallest SED porosity localization via spectral-emission sensory data in the literature. In-situ localization of micro-porosity using exclusively within-hatch-stripe data is a significant step towards within-layer defect mitigation, advanced process feedback control, and compliance with the reliability certification requirements of industries such as the aerospace industry.
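
    A minimal sketch of the prior-loss idea follows (an interpretation, not the paper's implementation): abundant nominal, defect-free samples contribute an extra penalty that pushes their predicted porosity probability towards zero. The network, feature dimensionality and weighting are assumptions.

```python
# Minimal sketch of a prior-guided loss (illustrative: the network, weighting
# and data handling here are assumptions, not the paper's setup). Nominal,
# known defect-free samples add a "prior" term that pulls their predicted
# porosity probability towards zero.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()

def prior_guided_loss(x_labelled, y, x_nominal, lam=0.5):
    # Supervised term on labelled within-hatch-stripe features.
    supervised = bce(classifier(x_labelled).squeeze(-1), y)
    # Prior term: abundant nominal data should be classified as porosity-free.
    prior = bce(classifier(x_nominal).squeeze(-1),
                torch.zeros(x_nominal.shape[0]))
    return supervised + lam * prior

x_lab = torch.randn(16, 64)                 # labelled spectral-emission features
y_lab = torch.randint(0, 2, (16,)).float()  # 1 = porosity present
x_nom = torch.randn(128, 64)                # abundant nominal (defect-free) data
print(prior_guided_loss(x_lab, y_lab, x_nom))
```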

    An entropy-based uncertainty measure for developing granular models

    There are two main ways to construct Fuzzy Logic rule-based models: using expert knowledge and using data mining methods. One of the most important aspects of Granular Computing (GrC) is to discover and extract knowledge from raw data in the form of information granules. The knowledge gained from GrC, the information granules, can be used to construct the linguistic rule-bases of a Fuzzy Logic-based system. Algorithms for iterative data granulation in the literature, so far, do not account for data uncertainty during the granulation process. In this paper, the uncertainty during the data granulation process is captured using a fundamental concept of information theory: entropy. In the proposed GrC algorithm, data granules are defined as information objects; hence the entropy measure used in this research work captures the uncertainty in the data vectors resulting from the merging of the information granules. The entropy-based uncertainty measure is used to guide the iterative granulation process, hence promoting the formation of new granules with less uncertainty. The enhanced information granules are then translated into a Fuzzy Logic inference system. The effectiveness of the proposed approach is demonstrated using established datasets.
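
    The sketch below illustrates how an entropy measure can score a candidate merge during iterative granulation, favouring merges that yield less uncertain granules; the histogram-based entropy and the fixed binning are assumptions about the exact measure used.

```python
# Minimal sketch of scoring a candidate merge by entropy (illustrative: the
# exact uncertainty measure and merge strategy in the paper may differ).
import numpy as np

def granule_entropy(points, bins=np.linspace(-3, 3, 16)):
    """Shannon entropy of a candidate (merged) granule, summed over dimensions."""
    H = 0.0
    for d in range(points.shape[1]):
        counts, _ = np.histogram(points[:, d], bins=bins)
        p = counts[counts > 0] / counts.sum()
        H -= np.sum(p * np.log2(p))
    return H

rng = np.random.default_rng(0)
tight = rng.normal(0.0, 0.1, size=(30, 2))     # a compact group of samples
loose = rng.normal(0.0, 1.0, size=(30, 2))     # a scattered group of samples

# Lower entropy -> less uncertainty: merging the two compact groups is preferred
# over merging a compact group with the scattered one.
print(granule_entropy(np.vstack([tight, tight + 0.05])))
print(granule_entropy(np.vstack([tight, loose])))
```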